Why You May Not Want to Correct “Bias” in an AI Model
Yoav Goldberg has published a critique of the FAccT paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Bender, Gebru, McMillan-Major, and Shmitchell. His two main criticisms are:
1. The paper attacks the wrong target
The real concern is not model size but language models in general. Framing the critique around size is misleading and harmful.
Any model, no matter how big or small, will reflect the data it was trained on. If you have a problem with a model’s output, look not at its size but at the data itself.
2. The paper takes one-sided political views, without presenting them as such and without presenting alternative views
It’s obvious that the people writing and endorsing this paper hold specific, left-wing political views that they infuse into their work. You can argue about whether their bias is better or worse than other kinds of bias, but we shouldn’t grant them the objectivity they claim.
via @yoavgo